AutoDetect: Designing an Autoencoder-based Detection Method for Poisoning Attacks on Object Detection Applications in the Military Domain
Liezenga, Alma M., Wijnja, Stefan, de Haan, Puck, Brink, Niels W. T., van Stijn, Jip J., Kamphuis, Yori, Schutte, Klamer
Poisoning attacks pose an increasing threat to the security and robustness of Artificial Intelligence systems in the military domain. The widespread use of open-source datasets and pretrained models exacerbates this risk. Despite the severity of this threat, there is limited research on the application and detection of poisoning attacks on object detection systems. This is especially problematic in the military domain, where attacks can have grave consequences. In this work, we investigate both the practical effect of poisoning attacks on military object detectors and the best approach to detecting these attacks. To support this research, we create a small, custom dataset featuring military vehicles: MilCivVeh. We explore the vulnerability of military object detectors to poisoning attacks by implementing a modified version of the BadDet attack: a patch-based poisoning attack. We then assess its impact, finding that while a positive attack success rate is achievable, it requires a substantial portion of the data to be poisoned -- raising questions about its practical applicability. To address the detection challenge, we test both specialized poisoning detection methods and anomaly detection methods from the visual industrial inspection domain. Since our research shows that both classes of methods are lacking, we introduce our own patch detection method: AutoDetect, a simple, fast, and lightweight autoencoder-based method. Our method shows promising results in separating clean from poisoned samples using the reconstruction error of image slices, outperforming existing methods while being less time- and memory-intensive. We stress that the availability of large, representative datasets in the military domain is a prerequisite for further evaluating both the risks of poisoning attacks and the opportunities for patch detection.
- Europe > Netherlands > South Holland > The Hague (0.04)
- Africa > Central African Republic > Ombella-M'Poko > Bimbo (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
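The core idea the abstract describes -- flagging poisoned samples by the reconstruction error of image slices -- can be sketched as follows. This is a minimal illustration, not the authors' implementation: it substitutes a closed-form linear autoencoder (PCA) for the paper's trained autoencoder, and the slice grid size, component count, and max-over-slices score are assumptions.

```python
import numpy as np

def slice_image(img, k):
    """Split an HxW image into a k*k grid of non-overlapping slices, flattened."""
    h, w = img.shape
    sh, sw = h // k, w // k
    return np.array([img[i*sh:(i+1)*sh, j*sw:(j+1)*sw].ravel()
                     for i in range(k) for j in range(k)])

class LinearAutoencoder:
    """Linear stand-in for an autoencoder: encode by projecting onto the top
    principal components of clean slices, decode by projecting back."""
    def __init__(self, n_components):
        self.n_components = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[:self.n_components]
        return self

    def reconstruction_error(self, X):
        Xc = X - self.mean_
        Xr = (Xc @ self.components_.T) @ self.components_
        return np.mean((Xc - Xr) ** 2, axis=1)  # per-slice mean squared error

def autodetect_score(img, ae, k=4):
    """Score an image by its worst-reconstructed slice; a patch confined to one
    slice drives this maximum up while clean slices reconstruct well."""
    return ae.reconstruction_error(slice_image(img, k)).max()
```

A detector would fit the autoencoder on slices of known-clean training images, then threshold `autodetect_score` to separate clean from poisoned samples.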
Human-centred test and evaluation of military AI
Helmer, David, Boardman, Michael, Conroy, S. Kate, Hepworth, Adam J., Harjani, Manoj
The REAIM 2024 Blueprint for Action states that AI applications in the military domain should be ethical and human-centric and that humans must remain responsible and accountable for their use and effects. Developing rigorous test and evaluation, verification and validation (TEVV) frameworks will contribute to robust oversight mechanisms. TEVV in the development and deployment of AI systems needs to involve human users throughout the lifecycle. Traditional human-centred test and evaluation methods from human factors need to be adapted for deployed AI systems that require ongoing monitoring and evaluation. The language around AI-enabled systems should shift to include the human(s) as a component of the system. Standards and requirements supporting this adjusted definition are needed, as are metrics and means to evaluate them. The need for dialogue between technologists and policymakers on human-centred TEVV will be evergreen, but dialogue needs to be initiated with an objective in mind for it to be productive. Developing TEVV throughout the system lifecycle is critical to support this evolution, including addressing human scalability and its impact on the scale of achievable testing. Communication between technical and non-technical communities must be improved to ensure operators and policymakers understand the risk assumed by system use and to better inform research and development. Test and evaluation in support of responsible AI deployment must include the effect of the human to reflect operationally realised system performance. Means of communicating TEVV results to those using, and making decisions regarding the use of, AI-based systems will be key to informing risk-based decisions about use.
- Oceania > Australia > Queensland (0.04)
- North America > United States (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > Singapore (0.04)
- Law (1.00)
- Government > Military (1.00)
Visuals of AI in the military domain: beyond 'killer robots' and towards better images?
In this blog post, Anna Nadibaidze explores the main themes found across common visuals of AI in the military domain. Inspired by the work and mission of Better Images of AI, she argues for the need to discuss and find alternatives to images of humanoid 'killer robots'. Anna holds a PhD in Political Science from the University of Southern Denmark (SDU) and is a researcher for the AutoNorms project, based at SDU. The integration of artificial intelligence (AI) technologies into the military domain, especially weapon systems and the process of using force, has been the topic of international academic, policy, and regulatory debates for more than a decade. The visual aspect of these discussions, however, has not been analysed in depth. This is both puzzling, considering the role that images play in shaping parts of the discourses on AI in warfare, and potentially problematic, given that many of these visuals, as I explore below, misrepresent major issues at stake in the debate.
- North America > United States (1.00)
- Europe > Denmark > Southern Denmark (0.25)
- Europe > Austria > Vienna (0.05)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
CMNEE: A Large-Scale Document-Level Event Extraction Dataset based on Open-Source Chinese Military News
Zhu, Mengna, Xu, Zijie, Zeng, Kaisheng, Xiao, Kaiming, Wang, Mao, Ke, Wenjun, Huang, Hongbin
Extracting structured event knowledge, including event triggers and corresponding arguments, from military texts is fundamental to many applications, such as intelligence analysis and decision assistance. However, event extraction in the military field faces a data scarcity problem, which impedes research on event extraction models in this domain. To alleviate this problem, we propose CMNEE, a large-scale, document-level open-source Chinese Military News Event Extraction dataset. It contains 17,000 documents and 29,223 events, all manually annotated based on a pre-defined schema for the military domain comprising 8 event types and 11 argument role types. We designed a two-stage, multi-turn annotation strategy to ensure the quality of CMNEE and reproduced several state-of-the-art event extraction models with a systematic evaluation. The experimental results on CMNEE are clearly lower than those on datasets from other domains, demonstrating that event extraction in the military domain poses unique challenges and requires further research effort. Our code and data can be obtained from https://github.com/Mzzzhu/CMNEE.
- Asia > Russia (0.14)
- North America > United States (0.14)
- Asia > Afghanistan (0.05)
- (17 more...)
- Government > Military (1.00)
- Government > Regional Government > Asia Government > China Government (0.60)
- Information Technology > Software (0.84)
- Information Technology > Data Science > Data Mining (0.68)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
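To make the "trigger plus typed arguments" structure the abstract describes concrete, here is a hypothetical document-level event record in that style. The field names, the example event type, and the role labels are illustrative assumptions, not CMNEE's actual schema; the dataset's real format is documented in its repository.

```python
# Hypothetical event record: an event is a typed trigger span plus a list of
# role-labelled argument spans extracted from the same document.
sample_event = {
    "doc_id": "news_000017",                # assumed identifier format
    "event_type": "MilitaryExercise",       # one of 8 pre-defined event types
    "trigger": {"text": "conducted drills", "offset": [42, 58]},
    "arguments": [                          # roles drawn from 11 role types
        {"role": "Agent", "text": "the navy"},
        {"role": "Location", "text": "the coast"},
        {"role": "Date", "text": "last Tuesday"},
    ],
}

def extract_role(event, role):
    """Return the texts of all arguments filling a given role in an event."""
    return [a["text"] for a in event["arguments"] if a["role"] == role]
```

An extraction model is evaluated on how well it recovers both the trigger span with its event type and each argument span with its role, which is why document-level annotation of this kind is labour-intensive.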
Kamala Harris to call for urgent action on AI threat to democracy and privacy
Short-term threats posed by artificial intelligence to democracy and privacy need to be addressed as urgently as longer-term existential threats, Kamala Harris, the US vice-president, is expected to say in a speech setting out the Biden administration's vision before the UK's Bletchley Park summit on AI. In a speech in London on Wednesday before attending the conference, she will say: "We reject the false choice that suggests we can either protect the public or advance innovation. We can – and we must – do both. And we must do so swiftly, as this technology rapidly advances." Harris wants to move beyond debates about the potential, sometimes speculative, existential threats AI may pose in the future and examine harms that are already happening, including those associated with discrimination and disinformation.
- North America > United States (1.00)
- North America > Canada > Ontario > Middlesex County > London (0.25)
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.25)